
Exercise 3-a. Transfer learning with convolutional models¶

Part a of exercise 3 is a continuation of exercise 2: you must analyse the transfer learning technique on the face mask dataset. You will therefore need to draw on what was covered in lab 5.3.


1. Assignment¶

Using the face mask dataset from exercise 2, you must try the following:

  • Use 3 pre-trained models to perform transfer learning via feature extraction. At least one of them must be a model designed for low-compute environments (few parameters), such as MobileNet or EfficientNet.
  • Selecting the best model from the previous point, perform fine-tuning on some of its layers (how many layers you unfreeze is up to you). Try at least 2 combinations (for example, unfreezing 2 and 7 layers...).
  • Quantitatively evaluate each model combination on the validation/test split of the original dataset, as well as on a different face mask detection dataset, for example https://www.kaggle.com/datasets/dipuiucse/facemaskdataset2022. If the dataset is very large, as the proposed one is, taking a small but sufficient sample to compute statistics is enough. Use several metrics: accuracy, precision, recall, F1-score, etc. (a sketch of how these metrics can be computed is shown right after this list).
  • Qualitatively evaluate the performance of the best model with images captured by your webcam, or with images of faces with/without masks taken from the Internet.
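
As a reference, a minimal sketch of how these metrics could be computed with scikit-learn; model, test_features and test_labels are illustrative names standing for one of the trained classifiers and its held-out data.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Threshold the sigmoid outputs at 0.5 to obtain binary predictions.
y_pred = (model.predict(test_features) > 0.5).astype('int32').ravel()

print('accuracy :', accuracy_score(test_labels, y_pred))
print('precision:', precision_score(test_labels, y_pred))
print('recall   :', recall_score(test_labels, y_pred))
print('F1-score :', f1_score(test_labels, y_pred))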

2. Submission¶

This exercise is submitted through the task created for that purpose in Enseñanza Virtual. You must hand in a notebook, together with the HTML generated from it, with its cells already evaluated.

The notebook must contain the following sections:

  1. Header: first name and surname.
  2. Preparation of the data to be used in Keras.
  3. Models and configurations built in Keras based on 3 pre-trained models (one sub-section per model, explaining in your own words, with reasons, why the model was chosen and which classifier is added).
  4. Training and evaluation of each model created (one sub-section per model). Analysis of results.
  5. Selection of the best model, and fine-tuning of two layer combinations (with few and with many layers). Analysis of results.
  6. Evaluation of the models on the test set and on another face mask detection dataset. Evaluation also on webcam/Internet images.
  7. Bibliography used (web links, class material, books, etc.).

2.1. Important note¶


ACADEMIC HONESTY AND PLAGIARISM: a practical assignment is an exam, so it must be done individually. Discussing and exchanging general information with classmates is allowed (and even encouraged), but NOT AT THE CODE LEVEL. Likewise, submitting third-party code, OBTAINED FROM THE INTERNET or by any other means, will be considered plagiarism.

Any plagiarism or code sharing that is detected will automatically mean a grade of ZERO IN THE COURSE for ALL the students involved. Consequently, these students will NOT keep, for future examination sittings, any grade obtained so far. WITHOUT PREJUDICE TO ANY OTHER DISCIPLINARY MEASURES THAT MIGHT BE TAKEN.



Header¶

Alfonso Alarcón Tamayo¶

Data preparation¶

To load the data, we will upload the dataset as a zip file to Google Drive and unzip it from there.

In [3]:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
In [4]:
# Copy the dataset and unzip it
!cp /content/drive/MyDrive/mask_ds.zip .
!unzip -q mask_ds.zip
In [5]:
# Unmount the drive
drive.flush_and_unmount()

Models and configurations built in Keras based on 3 pre-trained models¶

As we have already seen in the labs, when working with a small dataset it is highly advisable to use networks that have already been trained, so that the knowledge learned on that larger dataset can be reused on ours, even though the network was trained on very different images.

For this kind of problem, choosing the right model matters, since an appropriate transfer of learning speeds up training and helps our model generalise.

Model 1¶

For this first model I chose VGG16, a model commonly used for image classification (which is our case).

I ran some tests with a few standalone images and the hit rate on people wearing a mask is quite high (although it predicts different kinds of mask, it correctly understands that the person in the photo is wearing some sort of mask on their face).
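
For reference, a minimal sketch of this kind of single-image check with the full VGG16 model and decode_predictions; the image path is illustrative.

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing.image import load_img, img_to_array

# Load one image at the resolution the full VGG16 expects (224x224).
img = load_img('face_with_mask.jpg', target_size=(224, 224))  # illustrative path
x = preprocess_input(np.expand_dims(img_to_array(img), axis=0))

# Top-3 ImageNet classes predicted by the include_top=True model.
preds = VGG16(weights='imagenet').predict(x)
print(decode_predictions(preds, top=3)[0])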

The classifier I decided to use for all 3 models is the first option seen in the lab (applying the convolutional base to the dataset and feeding the resulting features to an independent classifier).

I chose this option because it is the less resource-hungry of the two, and my computer already takes quite a long time to process the data.

In [6]:
from tensorflow.keras.applications.vgg16 import VGG16, decode_predictions

modeloVGG16 = VGG16()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels.h5
553467096/553467096 [==============================] - 5s 0us/step
In [7]:
from tensorflow.keras.utils import plot_model

conv_base_VGG16 = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))

plot_model(conv_base_VGG16, to_file='conv_base_plot.png', show_shapes=True, show_layer_names=True)
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58889256/58889256 [==============================] - 0s 0us/step
Out[7]:

Here we do transfer learning with the pre-trained models we have chosen, i.e. we use the network, already pre-trained on a large amount of data, to extract the relevant features. I followed the example given in lab 5.3.

In [ ]:
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

base_dir = 'mask_ds'

train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')

datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20

def extract_features(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(
        directory,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base_VGG16.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            # without the break the generator would keep yielding batches indefinitely
            break
    return features, labels

train_features, train_labels = extract_features(train_dir, 2000)
validation_features, validation_labels = extract_features(validation_dir, 1000)
test_features, test_labels = extract_features(test_dir, 1000)
In [9]:
train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features=np.reshape(test_features, (1000, 4 * 4 * 512))

Here I flattened the features so that they can be fed into the dense layers of my model.

Model 2¶

For this second model I chose VGG19. As with the previous model, I tried it on several images from the dataset and it detects masks well (it assigns a high probability to the person wearing a mask).

VGG19 is a model trained for classification and, from what I have researched online, it was widely used (and is still used) during the pandemic to detect whether people were wearing a mask, so I chose it as the second pre-trained model.

In [10]:
from tensorflow.keras.applications.vgg19 import VGG19
modeloVGG19 = VGG19()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg19/vgg19_weights_tf_dim_ordering_tf_kernels.h5
574710816/574710816 [==============================] - 7s 0us/step
In [11]:
conv_base_VGG19 = VGG19(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))

plot_model(conv_base_VGG19, to_file='conv_base_plot2.png', show_shapes=True, show_layer_names=True)
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg19/vgg19_weights_tf_dim_ordering_tf_kernels_notop.h5
80134624/80134624 [==============================] - 0s 0us/step
Out[11]:
In [12]:
def extract_features_VGG19(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(
        directory,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base_VGG19.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            # without the break it would keep loading values indefinitely
            break
    return features, labels

train_features_VGG19, train_labels_VGG19 = extract_features_VGG19(train_dir, 2000)
validation_features_VGG19, validation_labels_VGG19 = extract_features_VGG19(validation_dir, 1000)
test_features_VGG19, test_labels_VGG19 = extract_features_VGG19(test_dir, 1000)
Found 4433 images belonging to 2 classes.
Found 2267 images belonging to 2 classes.
Found 1960 images belonging to 2 classes.
In [13]:
train_features_VGG19 = np.reshape(train_features_VGG19, (2000, 4 * 4 * 512))
validation_features_VGG19 = np.reshape(validation_features_VGG19, (1000, 4 * 4 * 512))
test_features_VGG19 = np.reshape(test_features_VGG19, (1000, 4 * 4 * 512))

Model 3¶

For the third and last model I used EfficientNetB0; I chose it because the statement asks us to use a model that requires little computational power (fewer parameters than the others).

In this model we can see that the number of layers is much higher than in the previous two; the idea behind it is to find an optimal scaling of depth, width and resolution that minimises the resources used while maximising performance.
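
As a quick sanity check of the "fewer parameters" point, a minimal sketch comparing the parameter counts of the three convolutional bases used in this notebook (weights=None is only used here to avoid re-downloading them):

from tensorflow.keras.applications import VGG16, VGG19, EfficientNetB0

# Parameter count of each convolutional base (include_top=False, 150x150 inputs).
for name, ctor in [('VGG16', VGG16), ('VGG19', VGG19), ('EfficientNetB0', EfficientNetB0)]:
    base = ctor(weights=None, include_top=False, input_shape=(150, 150, 3))
    print(f'{name}: {base.count_params():,} parameters')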

In [14]:
from tensorflow.keras.applications import EfficientNetB0
modelEfficientNetB0 = EfficientNetB0()
Downloading data from https://storage.googleapis.com/keras-applications/efficientnetb0.h5
21834768/21834768 [==============================] - 0s 0us/step
In [15]:
conv_base_Eff = EfficientNetB0(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))

plot_model(conv_base_Eff, to_file='conv_base_plot3.png', show_shapes=True, show_layer_names=True)
Downloading data from https://storage.googleapis.com/keras-applications/efficientnetb0_notop.h5
16705208/16705208 [==============================] - 0s 0us/step
Out[15]:
In [16]:
def extract_features_Eff(directory, sample_count):
    features = np.zeros(shape=(sample_count, 5, 5, 1280))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(
        directory,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base_Eff.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            break
    return features, labels

train_features_Eff, train_labels_Eff = extract_features_Eff(train_dir, 2000)
validation_features_Eff, validation_labels_Eff = extract_features_Eff(validation_dir, 1000)
test_features_Eff, test_labels_Eff = extract_features_Eff(test_dir, 1000)
Found 4433 images belonging to 2 classes.
Found 2267 images belonging to 2 classes.
Found 1960 images belonging to 2 classes.
In [17]:
train_features_Eff = np.reshape(train_features_Eff, (2000, 5 * 5 * 1280))
validation_features_Eff = np.reshape(validation_features_Eff, (1000, 5 * 5 * 1280))
test_features_Eff = np.reshape(test_features_Eff, (1000, 5 * 5 * 1280))

Training and evaluation of each model created (one sub-section for each). Analysis of results.¶

Model 1¶

In this first model we can see very good accuracy on both the training and the validation set (practically 100% in training and close to 98% in validation). It can also be seen that, as the epochs go by, it overfits slightly.

I added two layers with 512 and 256 ReLU neurons; given that the dataset is not particularly large, adding that many neurons may be what caused the overfitting that can be seen in the plot.

In [18]:
from keras import models
from keras import layers
from keras import optimizers

model = models.Sequential()
model.add(layers.Dense(512, activation='relu', input_dim = 4 * 4 * 512))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer=optimizers.RMSprop(lr=1e-3),
              loss='binary_crossentropy',
              metrics=['acc'])

history = model.fit(train_features, train_labels,
                    epochs=30,
                    batch_size=20,
                    validation_data=(validation_features, validation_labels))
WARNING:absl:`lr` is deprecated in Keras optimizer, please use `learning_rate` or use the legacy optimizer, e.g.,tf.keras.optimizers.legacy.RMSprop.
Epoch 1/30
100/100 [==============================] - 5s 45ms/step - loss: 0.6469 - acc: 0.8150 - val_loss: 0.3163 - val_acc: 0.8690
Epoch 2/30
100/100 [==============================] - 5s 47ms/step - loss: 0.1863 - acc: 0.9335 - val_loss: 0.3722 - val_acc: 0.8670
Epoch 3/30
100/100 [==============================] - 5s 51ms/step - loss: 0.1383 - acc: 0.9555 - val_loss: 0.1552 - val_acc: 0.9330
Epoch 4/30
100/100 [==============================] - 4s 41ms/step - loss: 0.1211 - acc: 0.9595 - val_loss: 0.2237 - val_acc: 0.9230
Epoch 5/30
100/100 [==============================] - 5s 45ms/step - loss: 0.1054 - acc: 0.9685 - val_loss: 0.0774 - val_acc: 0.9670
Epoch 6/30
100/100 [==============================] - 6s 55ms/step - loss: 0.0851 - acc: 0.9735 - val_loss: 0.0995 - val_acc: 0.9660
Epoch 7/30
100/100 [==============================] - 4s 42ms/step - loss: 0.0700 - acc: 0.9780 - val_loss: 0.1901 - val_acc: 0.9470
Epoch 8/30
100/100 [==============================] - 5s 51ms/step - loss: 0.0698 - acc: 0.9755 - val_loss: 0.0990 - val_acc: 0.9580
Epoch 9/30
100/100 [==============================] - 7s 71ms/step - loss: 0.0592 - acc: 0.9820 - val_loss: 0.1908 - val_acc: 0.9490
Epoch 10/30
100/100 [==============================] - 5s 50ms/step - loss: 0.0447 - acc: 0.9875 - val_loss: 0.1404 - val_acc: 0.9680
Epoch 11/30
100/100 [==============================] - 7s 66ms/step - loss: 0.0424 - acc: 0.9875 - val_loss: 0.1118 - val_acc: 0.9760
Epoch 12/30
100/100 [==============================] - 5s 49ms/step - loss: 0.0340 - acc: 0.9905 - val_loss: 0.1218 - val_acc: 0.9660
Epoch 13/30
100/100 [==============================] - 5s 53ms/step - loss: 0.0434 - acc: 0.9895 - val_loss: 0.5347 - val_acc: 0.9140
Epoch 14/30
100/100 [==============================] - 7s 75ms/step - loss: 0.0373 - acc: 0.9905 - val_loss: 0.2641 - val_acc: 0.9580
Epoch 15/30
100/100 [==============================] - 5s 46ms/step - loss: 0.0495 - acc: 0.9860 - val_loss: 0.1037 - val_acc: 0.9700
Epoch 16/30
100/100 [==============================] - 5s 52ms/step - loss: 0.0218 - acc: 0.9940 - val_loss: 0.2967 - val_acc: 0.9490
Epoch 17/30
100/100 [==============================] - 5s 51ms/step - loss: 0.0228 - acc: 0.9955 - val_loss: 0.2278 - val_acc: 0.9550
Epoch 18/30
100/100 [==============================] - 4s 44ms/step - loss: 0.0339 - acc: 0.9945 - val_loss: 0.2092 - val_acc: 0.9630
Epoch 19/30
100/100 [==============================] - 5s 51ms/step - loss: 0.0107 - acc: 0.9940 - val_loss: 0.1710 - val_acc: 0.9660
Epoch 20/30
100/100 [==============================] - 5s 46ms/step - loss: 0.0168 - acc: 0.9960 - val_loss: 0.1787 - val_acc: 0.9660
Epoch 21/30
100/100 [==============================] - 4s 43ms/step - loss: 0.0032 - acc: 0.9990 - val_loss: 0.2399 - val_acc: 0.9560
Epoch 22/30
100/100 [==============================] - 5s 51ms/step - loss: 0.0319 - acc: 0.9925 - val_loss: 0.2246 - val_acc: 0.9610
Epoch 23/30
100/100 [==============================] - 5s 49ms/step - loss: 0.0347 - acc: 0.9950 - val_loss: 0.5995 - val_acc: 0.9370
Epoch 24/30
100/100 [==============================] - 4s 43ms/step - loss: 0.0172 - acc: 0.9955 - val_loss: 0.2458 - val_acc: 0.9590
Epoch 25/30
100/100 [==============================] - 6s 57ms/step - loss: 0.0101 - acc: 0.9965 - val_loss: 0.3053 - val_acc: 0.9490
Epoch 26/30
100/100 [==============================] - 6s 58ms/step - loss: 0.0137 - acc: 0.9965 - val_loss: 0.5817 - val_acc: 0.9460
Epoch 27/30
100/100 [==============================] - 5s 48ms/step - loss: 0.0488 - acc: 0.9900 - val_loss: 0.2703 - val_acc: 0.9580
Epoch 28/30
100/100 [==============================] - 6s 62ms/step - loss: 0.0268 - acc: 0.9940 - val_loss: 0.1830 - val_acc: 0.9650
Epoch 29/30
100/100 [==============================] - 5s 54ms/step - loss: 0.0058 - acc: 0.9975 - val_loss: 0.2008 - val_acc: 0.9690
Epoch 30/30
100/100 [==============================] - 6s 56ms/step - loss: 0.0073 - acc: 0.9975 - val_loss: 0.2812 - val_acc: 0.9670
In [19]:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt

ent_acc = history.history['acc']
val_acc = history.history['val_acc']
ent_loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(ent_acc))

plt.plot(epochs, ent_acc, 'yo', label='Training')
plt.plot(epochs, val_acc, 'y', label='Validation')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, ent_loss, 'yo', label='Training')
plt.plot(epochs, val_loss, 'y', label='Validation')
plt.title('Training and Validation Loss')
plt.legend()

plt.show()

Model 2¶

This second model uses VGG19. Here we can see that it fits better from the very first epochs and that, although the accuracy drops slightly (around 1%), there is a bit less overfitting and the curve is smoother.

The loss is also lower in both cases and its curve is smoother (in the previous model it is more irregular).

Compared with the first model, we can see that with fewer neurons and layers the validation behaviour is more regular from the start.

In [20]:
from keras import models
from keras import layers
from keras import optimizers

model2 = models.Sequential()
model2.add(layers.Dense(256, activation='relu'))
model2.add(layers.Dropout(0.5))
model2.add(layers.Dense(1, activation='sigmoid'))

model2.compile(optimizer=optimizers.RMSprop(learning_rate=2e-5),
              loss='binary_crossentropy',
              metrics=['acc'])

history2 = model2.fit(train_features_VGG19, train_labels_VGG19,
                      epochs=30,
                      batch_size=50,
                      validation_data=(validation_features_VGG19, validation_labels_VGG19))
Epoch 1/30
40/40 [==============================] - 2s 33ms/step - loss: 0.5162 - acc: 0.7385 - val_loss: 0.4637 - val_acc: 0.7550
Epoch 2/30
40/40 [==============================] - 1s 36ms/step - loss: 0.3435 - acc: 0.8635 - val_loss: 0.3828 - val_acc: 0.8000
Epoch 3/30
40/40 [==============================] - 2s 42ms/step - loss: 0.2679 - acc: 0.8960 - val_loss: 0.3104 - val_acc: 0.8580
Epoch 4/30
40/40 [==============================] - 2s 40ms/step - loss: 0.2239 - acc: 0.9210 - val_loss: 0.2449 - val_acc: 0.8960
Epoch 5/30
40/40 [==============================] - 1s 31ms/step - loss: 0.1967 - acc: 0.9270 - val_loss: 0.2243 - val_acc: 0.9080
Epoch 6/30
40/40 [==============================] - 1s 28ms/step - loss: 0.1683 - acc: 0.9345 - val_loss: 0.2262 - val_acc: 0.9020
Epoch 7/30
40/40 [==============================] - 1s 27ms/step - loss: 0.1516 - acc: 0.9460 - val_loss: 0.2258 - val_acc: 0.8980
Epoch 8/30
40/40 [==============================] - 1s 27ms/step - loss: 0.1476 - acc: 0.9500 - val_loss: 0.1710 - val_acc: 0.9300
Epoch 9/30
40/40 [==============================] - 1s 27ms/step - loss: 0.1313 - acc: 0.9505 - val_loss: 0.1984 - val_acc: 0.9090
Epoch 10/30
40/40 [==============================] - 1s 27ms/step - loss: 0.1261 - acc: 0.9545 - val_loss: 0.1828 - val_acc: 0.9140
Epoch 11/30
40/40 [==============================] - 1s 31ms/step - loss: 0.1156 - acc: 0.9615 - val_loss: 0.1600 - val_acc: 0.9320
Epoch 12/30
40/40 [==============================] - 1s 27ms/step - loss: 0.1102 - acc: 0.9650 - val_loss: 0.1817 - val_acc: 0.9140
Epoch 13/30
40/40 [==============================] - 1s 27ms/step - loss: 0.1019 - acc: 0.9670 - val_loss: 0.1560 - val_acc: 0.9310
Epoch 14/30
40/40 [==============================] - 2s 38ms/step - loss: 0.0996 - acc: 0.9645 - val_loss: 0.1858 - val_acc: 0.9130
Epoch 15/30
40/40 [==============================] - 2s 40ms/step - loss: 0.0957 - acc: 0.9665 - val_loss: 0.1328 - val_acc: 0.9490
Epoch 16/30
40/40 [==============================] - 2s 42ms/step - loss: 0.0908 - acc: 0.9725 - val_loss: 0.1451 - val_acc: 0.9400
Epoch 17/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0852 - acc: 0.9715 - val_loss: 0.1453 - val_acc: 0.9370
Epoch 18/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0841 - acc: 0.9745 - val_loss: 0.1604 - val_acc: 0.9260
Epoch 19/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0762 - acc: 0.9760 - val_loss: 0.1258 - val_acc: 0.9540
Epoch 20/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0743 - acc: 0.9755 - val_loss: 0.1362 - val_acc: 0.9450
Epoch 21/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0703 - acc: 0.9785 - val_loss: 0.1283 - val_acc: 0.9450
Epoch 22/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0717 - acc: 0.9780 - val_loss: 0.1349 - val_acc: 0.9440
Epoch 23/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0656 - acc: 0.9820 - val_loss: 0.1375 - val_acc: 0.9410
Epoch 24/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0642 - acc: 0.9820 - val_loss: 0.1219 - val_acc: 0.9510
Epoch 25/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0606 - acc: 0.9850 - val_loss: 0.1105 - val_acc: 0.9520
Epoch 26/30
40/40 [==============================] - 2s 39ms/step - loss: 0.0611 - acc: 0.9820 - val_loss: 0.1189 - val_acc: 0.9510
Epoch 27/30
40/40 [==============================] - 2s 39ms/step - loss: 0.0579 - acc: 0.9820 - val_loss: 0.1279 - val_acc: 0.9450
Epoch 28/30
40/40 [==============================] - 2s 42ms/step - loss: 0.0563 - acc: 0.9860 - val_loss: 0.1362 - val_acc: 0.9420
Epoch 29/30
40/40 [==============================] - 1s 26ms/step - loss: 0.0516 - acc: 0.9855 - val_loss: 0.1468 - val_acc: 0.9370
Epoch 30/30
40/40 [==============================] - 1s 26ms/step - loss: 0.0507 - acc: 0.9865 - val_loss: 0.1306 - val_acc: 0.9450
In [21]:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt

ent_acc = history2.history['acc']
val_acc = history2.history['val_acc']
ent_loss = history2.history['loss']
val_loss = history2.history['val_loss']

epochs = range(len(ent_acc))

plt.plot(epochs, ent_acc, 'ro', label='Training')
plt.plot(epochs, val_acc, 'r', label='Validation')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, ent_loss, 'ro', label='Training')
plt.plot(epochs, val_loss, 'r', label='Validation')
plt.title('Training and Validation Loss')
plt.legend()

plt.show()

Model 3¶


For this third model we are using EfficientNetB0. As mentioned before, it is a model specialised in working with few resources; in this case I added a couple more layers than in the previous example.

We can see in the plot that the model is "stuck": it does not improve as the epochs go by, and the performance is not good (about 65% and 50% in training and validation respectively).
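
One possible cause worth checking (an assumption, not something verified here): the EfficientNet models in tf.keras include their input preprocessing internally and expect pixel values in the [0, 255] range, so the extra rescale=1./255 applied by the generator may be degrading the extracted features. A minimal sketch of a generator without that rescaling, reusing the directories and sizes defined above:

from keras.preprocessing.image import ImageDataGenerator

# EfficientNet* already rescales/normalises internally, so feed raw [0, 255] pixels.
datagen_eff = ImageDataGenerator()  # note: no rescale=1./255
generator_eff = datagen_eff.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=batch_size,
    class_mode='binary')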

In [22]:
from keras import models
from keras import layers
from keras import optimizers

model3 = models.Sequential()
model3.add(layers.Dense(256, activation='relu'))
model3.add(layers.Dropout(0.5))
model3.add(layers.Dense(128, activation='relu'))
model3.add(layers.Dropout(0.5))
model3.add(layers.Dense(64, activation='relu'))
model3.add(layers.Dropout(0.5))
model3.add(layers.Dense(1, activation='sigmoid'))

model3.compile(optimizer=optimizers.RMSprop(lr=2e-5),
              loss='binary_crossentropy',
              metrics=['acc'])

history3 = model3.fit(train_features_Eff, train_labels_Eff,
                    epochs=20,
                    batch_size=40,
                    validation_data=(validation_features_Eff, validation_labels_Eff))
WARNING:absl:`lr` is deprecated in Keras optimizer, please use `learning_rate` or use the legacy optimizer, e.g.,tf.keras.optimizers.legacy.RMSprop.
Epoch 1/20
50/50 [==============================] - 8s 150ms/step - loss: 1.7258 - acc: 0.6060 - val_loss: 0.6953 - val_acc: 0.5080
Epoch 2/20
50/50 [==============================] - 6s 121ms/step - loss: 0.6672 - acc: 0.6340 - val_loss: 0.7045 - val_acc: 0.5080
Epoch 3/20
50/50 [==============================] - 9s 174ms/step - loss: 0.6598 - acc: 0.6340 - val_loss: 0.7167 - val_acc: 0.5080
Epoch 4/20
50/50 [==============================] - 7s 139ms/step - loss: 0.6575 - acc: 0.6340 - val_loss: 0.7219 - val_acc: 0.5080
Epoch 5/20
50/50 [==============================] - 6s 129ms/step - loss: 0.6576 - acc: 0.6340 - val_loss: 0.7266 - val_acc: 0.5080
Epoch 6/20
50/50 [==============================] - 5s 105ms/step - loss: 0.6679 - acc: 0.6320 - val_loss: 0.7248 - val_acc: 0.5080
Epoch 7/20
50/50 [==============================] - 9s 188ms/step - loss: 0.6571 - acc: 0.6340 - val_loss: 0.7234 - val_acc: 0.5080
Epoch 8/20
50/50 [==============================] - 6s 114ms/step - loss: 0.6588 - acc: 0.6340 - val_loss: 0.7217 - val_acc: 0.5080
Epoch 9/20
50/50 [==============================] - 5s 100ms/step - loss: 0.6587 - acc: 0.6340 - val_loss: 0.7224 - val_acc: 0.5080
Epoch 10/20
50/50 [==============================] - 6s 129ms/step - loss: 0.6574 - acc: 0.6340 - val_loss: 0.7248 - val_acc: 0.5080
Epoch 11/20
50/50 [==============================] - 7s 133ms/step - loss: 0.6583 - acc: 0.6340 - val_loss: 0.7233 - val_acc: 0.5080
Epoch 12/20
50/50 [==============================] - 6s 130ms/step - loss: 0.6596 - acc: 0.6340 - val_loss: 0.7203 - val_acc: 0.5080
Epoch 13/20
50/50 [==============================] - 5s 98ms/step - loss: 0.6585 - acc: 0.6340 - val_loss: 0.7234 - val_acc: 0.5080
Epoch 14/20
50/50 [==============================] - 6s 120ms/step - loss: 0.6566 - acc: 0.6340 - val_loss: 0.7256 - val_acc: 0.5080
Epoch 15/20
50/50 [==============================] - 8s 154ms/step - loss: 0.6581 - acc: 0.6340 - val_loss: 0.7225 - val_acc: 0.5080
Epoch 16/20
50/50 [==============================] - 6s 112ms/step - loss: 0.6568 - acc: 0.6340 - val_loss: 0.7261 - val_acc: 0.5080
Epoch 17/20
50/50 [==============================] - 8s 161ms/step - loss: 0.6592 - acc: 0.6340 - val_loss: 0.7234 - val_acc: 0.5080
Epoch 18/20
50/50 [==============================] - 7s 143ms/step - loss: 0.6571 - acc: 0.6340 - val_loss: 0.7270 - val_acc: 0.5080
Epoch 19/20
50/50 [==============================] - 8s 155ms/step - loss: 0.6581 - acc: 0.6340 - val_loss: 0.7244 - val_acc: 0.5080
Epoch 20/20
50/50 [==============================] - 5s 97ms/step - loss: 0.6587 - acc: 0.6340 - val_loss: 0.7240 - val_acc: 0.5080
In [23]:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt

ent_acc = history3.history['acc']
val_acc = history3.history['val_acc']
ent_loss = history3.history['loss']
val_loss = history3.history['val_loss']

epochs = range(len(ent_acc))

plt.plot(epochs, ent_acc, 'bo', label='Training')
plt.plot(epochs, val_acc, 'b', label='Validation')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, ent_loss, 'bo', label='Training')
plt.plot(epochs, val_loss, 'b', label='Validation')
plt.title('Training and Validation Loss')
plt.legend()

plt.show()

Selection of the best model, and fine-tuning of two layer combinations (with few and many layers). Analysis of results.¶

The best model we found was the second one (the one based on VGG19), obtaining around 97% accuracy.

We will apply fine-tuning to it with many and with few layers.

Applying fine-tuning consists of unfreezing some layers of the pre-trained base (VGG19 in our case) and training the unfrozen layers jointly with the layers we added.
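
For reference, a minimal sketch of what that joint training looks like end to end, stacking the (partially) unfrozen VGG19 base and a small dense head in a single model fed with image generators; this is only a sketch under the directory and image-size assumptions used earlier, not the exact configuration trained below.

from keras import models, layers, optimizers
from keras.preprocessing.image import ImageDataGenerator

# One model: unfrozen convolutional base + dense classifier on top.
ft_model = models.Sequential([
    conv_base_VGG19,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid')])

ft_model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),
                 loss='binary_crossentropy', metrics=['acc'])

# Image generators so the gradients actually reach the unfrozen conv layers.
gen = ImageDataGenerator(rescale=1./255)
train_gen = gen.flow_from_directory(train_dir, target_size=(150, 150),
                                    batch_size=20, class_mode='binary')
val_gen = gen.flow_from_directory(validation_dir, target_size=(150, 150),
                                  batch_size=20, class_mode='binary')
# ft_model.fit(train_gen, epochs=10, validation_data=val_gen)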

In [24]:
conv_base_VGG19.summary()
Model: "vgg19"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_4 (InputLayer)        [(None, 150, 150, 3)]     0         
                                                                 
 block1_conv1 (Conv2D)       (None, 150, 150, 64)      1792      
                                                                 
 block1_conv2 (Conv2D)       (None, 150, 150, 64)      36928     
                                                                 
 block1_pool (MaxPooling2D)  (None, 75, 75, 64)        0         
                                                                 
 block2_conv1 (Conv2D)       (None, 75, 75, 128)       73856     
                                                                 
 block2_conv2 (Conv2D)       (None, 75, 75, 128)       147584    
                                                                 
 block2_pool (MaxPooling2D)  (None, 37, 37, 128)       0         
                                                                 
 block3_conv1 (Conv2D)       (None, 37, 37, 256)       295168    
                                                                 
 block3_conv2 (Conv2D)       (None, 37, 37, 256)       590080    
                                                                 
 block3_conv3 (Conv2D)       (None, 37, 37, 256)       590080    
                                                                 
 block3_conv4 (Conv2D)       (None, 37, 37, 256)       590080    
                                                                 
 block3_pool (MaxPooling2D)  (None, 18, 18, 256)       0         
                                                                 
 block4_conv1 (Conv2D)       (None, 18, 18, 512)       1180160   
                                                                 
 block4_conv2 (Conv2D)       (None, 18, 18, 512)       2359808   
                                                                 
 block4_conv3 (Conv2D)       (None, 18, 18, 512)       2359808   
                                                                 
 block4_conv4 (Conv2D)       (None, 18, 18, 512)       2359808   
                                                                 
 block4_pool (MaxPooling2D)  (None, 9, 9, 512)         0         
                                                                 
 block5_conv1 (Conv2D)       (None, 9, 9, 512)         2359808   
                                                                 
 block5_conv2 (Conv2D)       (None, 9, 9, 512)         2359808   
                                                                 
 block5_conv3 (Conv2D)       (None, 9, 9, 512)         2359808   
                                                                 
 block5_conv4 (Conv2D)       (None, 9, 9, 512)         2359808   
                                                                 
 block5_pool (MaxPooling2D)  (None, 4, 4, 512)         0         
                                                                 
=================================================================
Total params: 20024384 (76.39 MB)
Trainable params: 20024384 (76.39 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

We are going to unfreeze and train from block5 onwards and see how this affects performance.

In [25]:
conv_base_VGG19.trainable = True

train = False
# walk through the layers and unfreeze from block5_conv1 onwards
for layer in conv_base_VGG19.layers:
    if layer.name == 'block5_conv1':
        train = True
    if train:
        layer.trainable = True
    else:
        layer.trainable = False
In [26]:
model2.compile(optimizer=optimizers.RMSprop(learning_rate=2e-5),
              loss='binary_crossentropy',
              metrics=['acc'])

history2 = model2.fit(train_features_VGG19, train_labels_VGG19,
                      epochs=30,
                      batch_size=50,
                      validation_data=(validation_features_VGG19, validation_labels_VGG19))
Epoch 1/30
40/40 [==============================] - 2s 32ms/step - loss: 0.0496 - acc: 0.9870 - val_loss: 0.1195 - val_acc: 0.9500
Epoch 2/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0475 - acc: 0.9875 - val_loss: 0.1067 - val_acc: 0.9520
Epoch 3/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0433 - acc: 0.9900 - val_loss: 0.1007 - val_acc: 0.9560
Epoch 4/30
40/40 [==============================] - 2s 38ms/step - loss: 0.0432 - acc: 0.9900 - val_loss: 0.1144 - val_acc: 0.9520
Epoch 5/30
40/40 [==============================] - 2s 39ms/step - loss: 0.0426 - acc: 0.9895 - val_loss: 0.1099 - val_acc: 0.9510
Epoch 6/30
40/40 [==============================] - 2s 42ms/step - loss: 0.0415 - acc: 0.9890 - val_loss: 0.1016 - val_acc: 0.9530
Epoch 7/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0388 - acc: 0.9915 - val_loss: 0.1171 - val_acc: 0.9480
Epoch 8/30
40/40 [==============================] - 1s 31ms/step - loss: 0.0394 - acc: 0.9900 - val_loss: 0.1154 - val_acc: 0.9500
Epoch 9/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0352 - acc: 0.9940 - val_loss: 0.1009 - val_acc: 0.9540
Epoch 10/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0373 - acc: 0.9925 - val_loss: 0.1112 - val_acc: 0.9530
Epoch 11/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0360 - acc: 0.9915 - val_loss: 0.1041 - val_acc: 0.9520
Epoch 12/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0333 - acc: 0.9930 - val_loss: 0.1111 - val_acc: 0.9530
Epoch 13/30
40/40 [==============================] - 1s 28ms/step - loss: 0.0345 - acc: 0.9930 - val_loss: 0.0976 - val_acc: 0.9550
Epoch 14/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0311 - acc: 0.9940 - val_loss: 0.1025 - val_acc: 0.9540
Epoch 15/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0303 - acc: 0.9930 - val_loss: 0.1001 - val_acc: 0.9550
Epoch 16/30
40/40 [==============================] - 2s 39ms/step - loss: 0.0308 - acc: 0.9940 - val_loss: 0.1006 - val_acc: 0.9540
Epoch 17/30
40/40 [==============================] - 2s 41ms/step - loss: 0.0271 - acc: 0.9970 - val_loss: 0.1124 - val_acc: 0.9490
Epoch 18/30
40/40 [==============================] - 2s 42ms/step - loss: 0.0258 - acc: 0.9965 - val_loss: 0.0985 - val_acc: 0.9550
Epoch 19/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0238 - acc: 0.9965 - val_loss: 0.1006 - val_acc: 0.9560
Epoch 20/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0262 - acc: 0.9950 - val_loss: 0.0949 - val_acc: 0.9540
Epoch 21/30
40/40 [==============================] - 2s 41ms/step - loss: 0.0263 - acc: 0.9940 - val_loss: 0.1001 - val_acc: 0.9550
Epoch 22/30
40/40 [==============================] - 2s 45ms/step - loss: 0.0226 - acc: 0.9970 - val_loss: 0.0913 - val_acc: 0.9590
Epoch 23/30
40/40 [==============================] - 2s 42ms/step - loss: 0.0234 - acc: 0.9960 - val_loss: 0.0957 - val_acc: 0.9550
Epoch 24/30
40/40 [==============================] - 2s 45ms/step - loss: 0.0236 - acc: 0.9955 - val_loss: 0.1022 - val_acc: 0.9560
Epoch 25/30
40/40 [==============================] - 1s 35ms/step - loss: 0.0212 - acc: 0.9965 - val_loss: 0.0981 - val_acc: 0.9560
Epoch 26/30
40/40 [==============================] - 2s 60ms/step - loss: 0.0220 - acc: 0.9975 - val_loss: 0.0988 - val_acc: 0.9560
Epoch 27/30
40/40 [==============================] - 3s 74ms/step - loss: 0.0211 - acc: 0.9985 - val_loss: 0.0922 - val_acc: 0.9550
Epoch 28/30
40/40 [==============================] - 2s 50ms/step - loss: 0.0201 - acc: 0.9970 - val_loss: 0.1039 - val_acc: 0.9540
Epoch 29/30
40/40 [==============================] - 2s 55ms/step - loss: 0.0187 - acc: 0.9980 - val_loss: 0.0950 - val_acc: 0.9560
Epoch 30/30
40/40 [==============================] - 1s 29ms/step - loss: 0.0182 - acc: 0.9970 - val_loss: 0.0984 - val_acc: 0.9560
In [27]:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt

ent_acc = history2.history['acc']
val_acc = history2.history['val_acc']
ent_loss = history2.history['loss']
val_loss = history2.history['val_loss']

epochs = range(len(ent_acc))

plt.plot(epochs, ent_acc, 'ro', label='Entrenamiento')
plt.plot(epochs, val_acc, 'r', label='Validación')
plt.title('Accuracy en Entrenamiento y Validación')
plt.legend()

plt.figure()

plt.plot(epochs, ent_loss, 'ro', label='Entrenamiento')
plt.plot(epochs, val_loss, 'r', label='Validación')
plt.title('Pérdida en Entrenamiento y Validación')
plt.legend()
Out[27]:
<matplotlib.legend.Legend at 0x7f3ced0f7340>

In this first case we can see (even though the plot is not very intuitive) that we obtain a slight improvement in performance and a lower loss: training accuracy goes from ~98% to 99.8%, and validation from ~95% to 95.6%.

Here we unfroze only a few of the last layers and trained them together with the classifier we built on top. We obtained a slight improvement, probably because this way the pre-trained base adapts more specifically to our problem. A quick way to verify what was actually unfrozen is sketched below.
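As a sanity check (just a sketch, assuming conv_base_VGG19 is the pre-trained base loaded earlier), we could list which layers ended up trainable and how many parameters that represents:

# List each layer of the convolutional base and whether it will be updated
for layer in conv_base_VGG19.layers:
    print(f'{layer.name:20s} trainable={layer.trainable}')

# Count the parameters that are now trainable in the base
n_trainable = sum(layer.count_params() for layer in conv_base_VGG19.layers if layer.trainable)
print('Trainable parameters in the base:', n_trainable)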

In [28]:
conv_base_VGG19.trainable = True

train = False
for layer in conv_base_VGG19.layers:
    if layer.name == 'block4_conv1':
        train = True
    if train:
        layer.trainable = True
    else:
        layer.trainable = False
In [29]:
model2.compile(optimizer=optimizers.RMSprop(learning_rate=2e-5),  # learning_rate instead of the deprecated lr argument
              loss='binary_crossentropy',
              metrics=['acc'])

history2 = model2.fit(train_features_VGG19, train_labels_VGG19,
                      epochs=30,
                      batch_size=50,
                      validation_data=(validation_features_VGG19, validation_labels_VGG19))
Epoch 1/30
40/40 [==============================] - 2s 32ms/step - loss: 0.0201 - acc: 0.9960 - val_loss: 0.0929 - val_acc: 0.9570
Epoch 2/30
40/40 [==============================] - 1s 28ms/step - loss: 0.0161 - acc: 0.9980 - val_loss: 0.0982 - val_acc: 0.9570
Epoch 3/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0172 - acc: 0.9970 - val_loss: 0.1003 - val_acc: 0.9550
Epoch 4/30
40/40 [==============================] - 1s 27ms/step - loss: 0.0164 - acc: 0.9985 - val_loss: 0.1083 - val_acc: 0.9480
Epoch 5/30
40/40 [==============================] - 1s 31ms/step - loss: 0.0161 - acc: 0.9985 - val_loss: 0.0896 - val_acc: 0.9560
Epoch 6/30
40/40 [==============================] - 2s 58ms/step - loss: 0.0166 - acc: 0.9975 - val_loss: 0.1062 - val_acc: 0.9500
Epoch 7/30
40/40 [==============================] - 3s 63ms/step - loss: 0.0162 - acc: 0.9990 - val_loss: 0.1098 - val_acc: 0.9490
Epoch 8/30
40/40 [==============================] - 3s 68ms/step - loss: 0.0157 - acc: 0.9980 - val_loss: 0.1001 - val_acc: 0.9560
Epoch 9/30
40/40 [==============================] - 2s 52ms/step - loss: 0.0153 - acc: 0.9990 - val_loss: 0.0951 - val_acc: 0.9570
Epoch 10/30
40/40 [==============================] - 1s 35ms/step - loss: 0.0139 - acc: 0.9990 - val_loss: 0.0961 - val_acc: 0.9570
Epoch 11/30
40/40 [==============================] - 2s 41ms/step - loss: 0.0132 - acc: 0.9980 - val_loss: 0.1012 - val_acc: 0.9560
Epoch 12/30
40/40 [==============================] - 2s 45ms/step - loss: 0.0130 - acc: 0.9990 - val_loss: 0.0994 - val_acc: 0.9560
Epoch 13/30
40/40 [==============================] - 2s 52ms/step - loss: 0.0143 - acc: 0.9990 - val_loss: 0.0998 - val_acc: 0.9560
Epoch 14/30
40/40 [==============================] - 2s 59ms/step - loss: 0.0120 - acc: 0.9990 - val_loss: 0.1051 - val_acc: 0.9560
Epoch 15/30
40/40 [==============================] - 2s 51ms/step - loss: 0.0121 - acc: 1.0000 - val_loss: 0.0946 - val_acc: 0.9560
Epoch 16/30
40/40 [==============================] - 2s 43ms/step - loss: 0.0113 - acc: 0.9990 - val_loss: 0.0975 - val_acc: 0.9570
Epoch 17/30
40/40 [==============================] - 2s 48ms/step - loss: 0.0112 - acc: 0.9995 - val_loss: 0.1019 - val_acc: 0.9560
Epoch 18/30
40/40 [==============================] - 1s 35ms/step - loss: 0.0109 - acc: 1.0000 - val_loss: 0.0973 - val_acc: 0.9580
Epoch 19/30
40/40 [==============================] - 1s 35ms/step - loss: 0.0104 - acc: 0.9990 - val_loss: 0.1111 - val_acc: 0.9510
Epoch 20/30
40/40 [==============================] - 2s 40ms/step - loss: 0.0101 - acc: 0.9995 - val_loss: 0.1035 - val_acc: 0.9560
Epoch 21/30
40/40 [==============================] - 2s 54ms/step - loss: 0.0102 - acc: 0.9990 - val_loss: 0.0903 - val_acc: 0.9580
Epoch 22/30
40/40 [==============================] - 2s 61ms/step - loss: 0.0100 - acc: 1.0000 - val_loss: 0.1104 - val_acc: 0.9500
Epoch 23/30
40/40 [==============================] - 2s 55ms/step - loss: 0.0107 - acc: 0.9990 - val_loss: 0.0964 - val_acc: 0.9570
Epoch 24/30
40/40 [==============================] - 1s 37ms/step - loss: 0.0093 - acc: 1.0000 - val_loss: 0.0955 - val_acc: 0.9580
Epoch 25/30
40/40 [==============================] - 1s 37ms/step - loss: 0.0099 - acc: 0.9995 - val_loss: 0.0948 - val_acc: 0.9570
Epoch 26/30
40/40 [==============================] - 1s 32ms/step - loss: 0.0081 - acc: 1.0000 - val_loss: 0.0980 - val_acc: 0.9570
Epoch 27/30
40/40 [==============================] - 1s 35ms/step - loss: 0.0092 - acc: 1.0000 - val_loss: 0.0944 - val_acc: 0.9580
Epoch 28/30
40/40 [==============================] - 1s 34ms/step - loss: 0.0083 - acc: 1.0000 - val_loss: 0.0969 - val_acc: 0.9580
Epoch 29/30
40/40 [==============================] - 1s 33ms/step - loss: 0.0076 - acc: 1.0000 - val_loss: 0.0968 - val_acc: 0.9580
Epoch 30/30
40/40 [==============================] - 2s 59ms/step - loss: 0.0076 - acc: 1.0000 - val_loss: 0.1032 - val_acc: 0.9560
In [30]:
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt

ent_acc = history2.history['acc']
val_acc = history2.history['val_acc']
ent_loss = history2.history['loss']
val_loss = history2.history['val_loss']

epochs = range(len(ent_acc))

plt.plot(epochs, ent_acc, 'ro', label='Entrenamiento')
plt.plot(epochs, val_acc, 'r', label='Validación')
plt.title('Accuracy en Entrenamiento y Validación')
plt.legend()

plt.figure()

plt.plot(epochs, ent_loss, 'ro', label='Entrenamiento')
plt.plot(epochs, val_loss, 'r', label='Validación')
plt.title('Pérdida en Entrenamiento y Validación')
plt.legend()
Out[30]:
<matplotlib.legend.Legend at 0x7f3cecd0d180>

We can smooth the curves a bit with the smooth_curve method seen in class in lab 5, which applies an exponential moving average (each smoothed point is previous * factor + point * (1 - factor)).

In [31]:
def smooth_curve(points, factor=0.8):
  smoothed_points = []
  for point in points:
    if smoothed_points:
      previous = smoothed_points[-1]
      smoothed_points.append(previous * factor + point * (1 - factor))
    else:
      smoothed_points.append(point)
  return smoothed_points

plt.plot(epochs,
         smooth_curve(ent_acc), 'bo', label='Acc Entrenamiento Suavizada')
plt.plot(epochs,
         smooth_curve(val_acc), 'b', label='Acc Validación Suavizada')
plt.title('Accuracy en Entrenamiento y Validación')
plt.legend()

plt.figure()

plt.plot(epochs,
         smooth_curve(ent_loss), 'bo', label='Pérdida Entrenamiento Suavizada')
plt.plot(epochs,
         smooth_curve(val_loss), 'b', label='Pérdida Validación Suavizada')
plt.title('Pérdida en Entrenamiento y Validación')
plt.legend()

plt.show()

In this second case I notice the model overfits a bit more: training accuracy reaches 100% while validation gets slightly worse. Perhaps unfreezing and training so many layers made the base adapt too closely to our training data, causing the overfitting; or perhaps, since the dataset is not very large, fine-tuning this many layers also encourages overfitting. One possible mitigation is sketched below.
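One common way to limit this kind of overfitting, not used above but shown here as a hedged sketch with the same training arrays, is to stop training when the validation loss stops improving, via Keras' EarlyStopping callback:

from keras.callbacks import EarlyStopping

# Stop if val_loss has not improved for 5 epochs and keep the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# history_ft is just an illustrative name for this hypothetical run
history_ft = model2.fit(train_features_VGG19, train_labels_VGG19,
                        epochs=30,
                        batch_size=50,
                        validation_data=(validation_features_VGG19, validation_labels_VGG19),
                        callbacks=[early_stop])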

Evaluation of the models on the test set, and on another face mask detection dataset
¶

We are going to look at several metrics, specifically accuracy, precision, recall and F1, so let me briefly comment on what each of them tells us (a small illustrative snippet follows the list).

  • Accuracy: the fraction of all predictions that are correct.
  • Precision: of the samples predicted as positive, the fraction that really are positive, TP / (TP + FP).
  • Recall: TP / (TP + FN); a value close to 1 indicates the model misses very few positives.
  • F1-score: the harmonic mean of precision and recall, combining both into a single number.
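As a tiny illustration of these four metrics (with made-up labels, only to show the sklearn calls used below):

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1]   # hypothetical ground-truth labels (1 = mask)
y_pred = [1, 0, 1, 0, 0, 1]   # hypothetical predictions

print('accuracy :', accuracy_score(y_true, y_pred))   # 5 of 6 predictions are correct
print('precision:', precision_score(y_true, y_pred))  # 3 of the 3 predicted positives are real
print('recall   :', recall_score(y_true, y_pred))     # 3 of the 4 real positives are recovered
print('f1       :', f1_score(y_true, y_pred))         # harmonic mean of precision and recall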
In [36]:
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

predictions = model.predict(test_features)

# Convert the predicted probabilities into binary labels
rounded_predictions = np.round(predictions)

# F1-score
f1 = f1_score(test_labels, rounded_predictions)
recall = recall_score(test_labels, rounded_predictions)


# Evaluate the model (loss and accuracy)
test_loss, test_acc = model.evaluate(test_features, test_labels)
print('Test accuracy:', test_acc)
print('test_loss:', test_loss)
print('Test F1-score:', f1)
print('Recall:', recall)
32/32 [==============================] - 1s 32ms/step
32/32 [==============================] - 1s 36ms/step - loss: 0.2742 - acc: 0.9640
Test accuracy: 0.9639999866485596
test_loss: 0.2742430567741394
Test F1-score: 0.96709323583181
Recall: 0.9742173112338858

We can see that model 1 obtains an accuracy of 96.4% on the test set; given that we obtained 96.7% in validation, this is a good test performance, close to the figures seen earlier. In addition, the loss is low (a sign of a well-fitted model), and the F1 and recall values close to 1 indicate that the model classifies well. A sketch adding precision and a confusion matrix follows.
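Since the assignment also asks for precision, here is a minimal sketch of how it, together with a confusion matrix, could be added for model 1, reusing the rounded predictions computed above:

from sklearn.metrics import precision_score, confusion_matrix

precision = precision_score(test_labels, rounded_predictions)
print('Precision:', precision)

# Confusion matrix: rows are the true classes, columns the predicted classes
print(confusion_matrix(test_labels, rounded_predictions))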

In [37]:
predictions2 = model2.predict(test_features_VGG19)

rounded_predictions = np.round(predictions2)

# F1-score and recall
# Note: test_labels comes from the VGG16 extraction; test_labels_VGG19 would be the
# consistent choice here, since flow_from_directory shuffles by default
f1 = f1_score(test_labels, rounded_predictions)
recall = recall_score(test_labels, rounded_predictions)


# Evaluate the model (loss and accuracy)
test_loss, test_acc = model2.evaluate(test_features_VGG19, test_labels_VGG19)
print('Test accuracy:', test_acc)
print('test_loss:', test_loss)
print('Test F1-score:', f1)
print('Recall:', recall)
32/32 [==============================] - 1s 15ms/step
32/32 [==============================] - 1s 11ms/step - loss: 0.0900 - acc: 0.9630
Test accuracy: 0.9629999995231628
test_loss: 0.09000647068023682
Test F1-score: 0.5459558823529412
Recall: 0.5469613259668509

For our second model we obtain similar performance, an accuracy of around 96%, and in this case the loss is even lower (the loss measures how far the model's predictions deviate from the true labels on the test set). The F1 and recall metrics are much lower, however; this is most likely because they are computed against test_labels (the labels from the VGG16 extraction) instead of test_labels_VGG19, so their ordering may not match these predictions. A quick check is sketched below.
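If the mismatch really is the cause, recomputing the metrics against the labels extracted together with the VGG19 features should give F1/recall values consistent with the ~96% accuracy. A hedged sketch:

# Recompute F1 and recall against the labels from the same VGG19 feature extraction
rounded_predictions_vgg19 = np.round(model2.predict(test_features_VGG19))

print('F1 (VGG19 labels):    ', f1_score(test_labels_VGG19, rounded_predictions_vgg19))
print('Recall (VGG19 labels):', recall_score(test_labels_VGG19, rounded_predictions_vgg19))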

In [38]:
predictions3 = model3.predict(test_features_Eff)


rounded_predictions = np.round(predictions3)

# F1-score and recall (note: again computed against test_labels rather than test_labels_Eff)
f1 = f1_score(test_labels, rounded_predictions)
recall = recall_score(test_labels, rounded_predictions)


# Evaluate the model (loss and accuracy)
test_loss, test_acc = model3.evaluate(test_features_Eff, test_labels_Eff)
print('Test accuracy:', test_acc)
print('test_loss:', test_loss)
print('Test F1-score:', f1)
print('Recall:', recall)
32/32 [==============================] - 1s 30ms/step
32/32 [==============================] - 1s 18ms/step - loss: 0.7197 - acc: 0.5160
Test accuracy: 0.515999972820282
test_loss: 0.7197336554527283
Test F1-score: 0.7038237200259235
Recall: 1.0

New dataset¶

Now, for the new dataset, we will do the same as at the beginning: from Drive we load the images of a subset that I prepared from the dataset recommended in the assignment link, reduced in size while keeping the class proportions, since the original dataset is very large. A sketch of that subsampling is shown below.
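For reference, a minimal sketch of how such a subset could be drawn while keeping the same number of images per class (the paths and the sample size here are hypothetical):

import os, random, shutil

random.seed(0)
src_root = 'FaceMaskDataset2022/full'   # hypothetical path to the full downloaded dataset
dst_root = 'Face_Mask_Dataset/test'     # reduced copy used in this notebook
per_class = 1000                        # hypothetical number of images kept per class

for cls in os.listdir(src_root):
    os.makedirs(os.path.join(dst_root, cls), exist_ok=True)
    files = os.listdir(os.path.join(src_root, cls))
    # Draw the same number of images from every class to keep the proportions
    for name in random.sample(files, per_class):
        shutil.copy(os.path.join(src_root, cls, name), os.path.join(dst_root, cls, name))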

In [33]:
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
In [39]:
# Copy the dataset and unzip it
!cp /content/drive/MyDrive/Face_Mask_Dataset.zip .
!unzip -q Face_Mask_Dataset.zip
In [40]:
# Unmount Drive
drive.flush_and_unmount()
In [41]:
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

base_dir = 'Face_Mask_Dataset'

train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')

datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20

# Pass the images through the VGG16 convolutional base and return (features, labels)
def extract_features2(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(
        directory,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base_VGG16.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            break
    return features, labels

test_features, test_labels = extract_features2(test_dir, 1000)
Found 2073 images belonging to 2 classes.
1/1 [==============================] - 4s 4s/step
... (49 more similar per-batch progress lines omitted)
In [43]:
test_features=np.reshape(test_features, (1000, 4 * 4 * 512))

Now that we have extracted the features, let's compute the same metrics as before on this new dataset.

Test with model 1: VGG16

In [47]:
predictions = model.predict(test_features)

# Convert the predicted probabilities into binary labels
rounded_predictions = np.round(predictions)

f1 = f1_score(test_labels, rounded_predictions)
recall = recall_score(test_labels, rounded_predictions)

# Evaluate the model (loss and accuracy)
test_loss, test_acc = model.evaluate(test_features, test_labels)
print('Test accuracy:', test_acc)
print('Test loss:', test_loss)
print('Test F1-score:', f1)
print('Recall:', recall)
32/32 [==============================] - 0s 10ms/step
32/32 [==============================] - 0s 11ms/step - loss: 1.0188 - acc: 0.9150
Test accuracy: 0.9150000214576721
Test loss: 1.0188205242156982
Test F1-score: 0.9177153920619554
Recall: 0.9978947368421053

In this case, although the loss is somewhat higher, we obtain a good accuracy (91.5%). The F1 and recall values are also high, so overall the model trained on our original dataset gives a good result on this new mask dataset. A per-class breakdown is sketched below.
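For a fuller per-class breakdown on this new dataset, sklearn's classification_report could be printed from the same predictions (just a sketch reusing the variables above):

from sklearn.metrics import classification_report

# Per-class precision, recall and F1 for model 1 on the new dataset
print(classification_report(test_labels, rounded_predictions))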


Test with VGG19

In [52]:
# Note: these predictions are still computed with model (VGG16); for VGG19's own
# F1/recall this would need model2.predict on features extracted with conv_base_VGG19
predictions = model.predict(test_features)

# Convert the predicted probabilities into binary labels
rounded_predictions = np.round(predictions)

f1 = f1_score(test_labels, rounded_predictions)
recall = recall_score(test_labels, rounded_predictions)

# Evaluate model2 on the features extracted with the VGG16 base
test_loss, test_acc = model2.evaluate(test_features, test_labels)
print('Test accuracy:', test_acc)
print('Test loss:', test_loss)
print('Test F1-score:', f1)
print('Recall:', recall)
32/32 [==============================] - 1s 25ms/step
32/32 [==============================] - 1s 14ms/step - loss: 0.3543 - acc: 0.8520
Test accuracy: 0.8519999980926514
Test loss: 0.35433080792427063
Test F1-score: 0.9177153920619554
Recall: 0.9978947368421053

For this second model the accuracy drops to about 85% on the new dataset, although the F1 and recall remain high, suggesting that when it flags an image as positive (mask) it is usually right. Note, however, that the F1 and recall printed here are identical to the VGG16 case because the predictions in this cell come from model rather than model2, and the features fed to model2 were extracted with the VGG16 base, so this evaluation should be taken with caution; a consistent evaluation is sketched below.
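For completeness, a hedged sketch of how model2 could be evaluated consistently on this new dataset: extract the features with the VGG19 base instead of reusing the VGG16 features, and compute the metrics from model2's own predictions (the function below is just a variant of extract_features2 defined earlier):

# Same extraction loop as extract_features2, but using the VGG19 convolutional base
def extract_features_vgg19(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count,))
    generator = datagen.flow_from_directory(directory, target_size=(150, 150),
                                            batch_size=batch_size, class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features[i * batch_size:(i + 1) * batch_size] = conv_base_VGG19.predict(inputs_batch)
        labels[i * batch_size:(i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            break
    return features, labels

feat19, lab19 = extract_features_vgg19(test_dir, 1000)
# Flatten, as was done for test_features above (skip if model2 expects 4x4x512 maps)
feat19 = np.reshape(feat19, (1000, 4 * 4 * 512))

pred19 = np.round(model2.predict(feat19))
print('Accuracy:', accuracy_score(lab19, pred19))
print('F1:      ', f1_score(lab19, pred19))
print('Recall:  ', recall_score(lab19, pred19))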

Bibliography used (web links, class material, books, etc.).
¶

https://colab.research.google.com/github/miguelamda/DL/blob/master/5.%20Redes%20Convolucionales/Practica5.3.%20CNN%20Preentrenadas.ipynb#scrollTo=3iMLhsATxRWx

https://www.v7labs.com/blog/f1-score-guide#:~:text=for%20Machine%20Learning-,What%20is%20F1%20score%3F,prediction%20across%20the%20entire%20dataset.

https://medium.com/@dhirajchaudhari481/vgg16-19-best-performing-convnet-models-in-computer-vision-eeca3fa34788

https://www.kaggle.com/code/greynolan/face-mask-detection-vgg19/notebook

https://d2l.ai/chapter_computer-vision/fine-tuning.html